Trusting Artificial Intelligence


Trusting Artificial Intelligence in Cybersecurity is a Double-edged Sword

#artificialintelligence

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from both the private and public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defense strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users' trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats.


Trusting Artificial Intelligence (AI) with Crypto Trades--is it Time to Ditch the Hard Work?

#artificialintelligence

Crypto industry peaks demand more than two hands to handle, so more traders are looking to AI for an advantage. Quick analysis and making the right calls at the right time can be a game-changer. Will AI be the new frontier in the crypto markets? Admittedly, precise calls in the crypto space, or in any other industry, are scarce, with complex charts and other determinants to weigh. Decent patterns spotted at the right pace, coupled with a hungry investor, can always get the job done.


New approach needed for defining AI standards in cybersecurity, say Oxford academics

#artificialintelligence

Leading experts in cybersecurity and ethics from the Oxford Internet Institute, University of Oxford, Dr Mariarosaria Taddeo and Professor Luciano Floridi, together with Professor Tom McCutcheon from the Defence Science and Technology Laboratories, believe the current approach to defining standards and certification procedures for Artificial Intelligence (AI) systems in cybersecurity is risky and should be replaced with an alternative method. Their new paper, "Trusting Artificial Intelligence in Cybersecurity: a Double-Edged Sword", published in the journal Nature Machine Intelligence, argues that defining standards based on placing implicit trust in AI systems to perform as expected, without any degree of monitoring or control, could leave us at risk of new forms of AI attacks that disrupt systems and change their behaviour. Current 'trust'-based standards and certification procedures in AI typically see tasks carried out with little or no control over the way the AI-driven tasks are performed. In their paper, the cybersecurity experts present the case for developing 'reliable' rather than trustworthy AI in cybersecurity. They argue that reliable AI has greater potential to ensure the successful deployment of AI systems for cybersecurity tasks, making them less vulnerable to cyber-attacks.